Results 1 - 6 of 6
1.
Math Biosci Eng ; 21(2): 2970-2990, 2024 Jan 30.
Article in English | MEDLINE | ID: mdl-38454715

ABSTRACT

Training neural networks with conventional supervised backpropagation algorithms is a challenging task, owing to significant limitations such as the risk of stagnation in local minima of the network's loss landscape, which may prevent the network from finding the global minimum of its loss function and slow its convergence. Another challenge is vanishing and exploding gradients, which occur when the gradients of the model's loss function become either infinitesimally small or unmanageably large during training, likewise hindering convergence. In addition, traditional gradient-based algorithms require the pre-selection of learning parameters such as the learning rate, activation function, batch size, and stopping criteria. Recent research has shown the potential of evolutionary optimization algorithms to address most of these challenges in optimizing the overall performance of neural networks. In this research, we introduce and validate an evolutionary optimization framework to train multilayer perceptrons, which are simple feedforward neural networks. The suggested framework uses a recently proposed evolutionary cooperative optimization algorithm, the dynamic group-based cooperative optimizer. The ability of this optimizer to solve a wide range of real optimization problems motivated our research group to benchmark its performance in training multilayer perceptron models. We validated the proposed optimization framework on a set of five datasets for engineering applications and compared its performance against the conventional backpropagation algorithm and other commonly used evolutionary optimization algorithms. The simulations showed the competitive performance of the proposed framework for most examined datasets in terms of overall performance and convergence.
For three benchmarking datasets, the proposed framework provided increases of 2.7%, 4.83%, and 5.13% over the performance of the second best-performing optimizers, respectively.
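The idea of gradient-free, population-based training described in this abstract can be illustrated with a minimal sketch. Note this is NOT the dynamic group-based cooperative optimizer from the paper; it is a simple (mu + lambda) evolution strategy evolving the flattened weight vector of a tiny MLP on a toy XOR task, with all parameters (population size, mutation sigma, network shape) chosen purely for illustration.

```python
# Hypothetical sketch: evolving MLP weights without backpropagation.
# A simple (mu + lambda) evolution strategy, not the paper's optimizer.
import math
import random

random.seed(0)

# XOR toy dataset: (inputs, target)
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

N_HID = 4                    # hidden units
DIM = 3 * N_HID + N_HID + 1  # per-hidden (2 weights + bias) + output weights + bias

def forward(w, x):
    """Tiny 2-4-1 MLP: tanh hidden layer, sigmoid output."""
    i, hid = 0, []
    for _ in range(N_HID):
        s = w[i] * x[0] + w[i + 1] * x[1] + w[i + 2]
        hid.append(math.tanh(s))
        i += 3
    s = sum(w[i + h] * hid[h] for h in range(N_HID)) + w[i + N_HID]
    return 1.0 / (1.0 + math.exp(-s))

def loss(w):
    return sum((forward(w, x) - y) ** 2 for x, y in DATA)

def evolve(pop_size=30, gens=300, sigma=0.4):
    """(mu + lambda) selection over the flattened weight vector."""
    pop = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=loss)
        parents = pop[: pop_size // 2]  # elitist: best half survives
        children = [[g + random.gauss(0, sigma) for g in random.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return min(pop, key=loss)

best = evolve()
print("final loss:", round(loss(best), 3))
```

Because the best individuals are always retained, the best loss is non-increasing across generations; no learning rate or gradient computation is needed, which is the appeal the abstract describes.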

2.
Biomed Phys Eng Express ; 9(5)2023 07 12.
Article in English | MEDLINE | ID: mdl-37402356

ABSTRACT

Biological neurons are typically modeled using the Hodgkin-Huxley formalism, which requires significant computational power to simulate. However, since realistic neural network models require thousands of synaptically coupled neurons, a faster approach is needed. Discrete dynamical systems are promising alternatives to continuous models, as they can simulate neuron activity in far fewer steps. Many existing discrete models are based on Poincaré-map-like approaches, which trace periodic activity at a cross section of the cycle. However, this approach is limited to periodic solutions. Biological neurons have many key properties beyond periodicity, such as the minimum applied current required for a resting cell to generate an action potential. To address these properties, we propose a discrete dynamical system model of a biological neuron that incorporates the threshold dynamics of the Hodgkin-Huxley model, the logarithmic relationship between applied current and frequency, modifications to relaxation oscillators, and spike-frequency adaptation in response to modulatory hyperpolarizing currents. It is important to note that several critical parameters are transferred from the continuous model to our proposed discrete dynamical system. These parameters include the membrane capacitance, leak conductance, and maximum conductance values for sodium and potassium ion channels, which are essential for accurately simulating the behavior of biological neurons. By incorporating these parameters into our model, we can ensure that it closely approximates the continuous model's behavior, while also offering a more computationally efficient alternative for simulating neural networks.
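One threshold property the abstract emphasizes, that a resting cell needs a minimum applied current (the rheobase) to fire at all, can be shown with a discrete-time map. This sketch is a generic discrete leaky integrate-and-fire map, not the authors' Hodgkin-Huxley-derived model; parameter names (capacitance, leak conductance) merely mirror the continuous-model quantities the abstract mentions, and all values are assumed.

```python
# Hedged sketch: a generic discrete-time leaky integrate-and-fire map
# illustrating threshold (rheobase) behavior. Not the paper's model.

C = 1.0       # membrane capacitance (arbitrary units, assumed)
G_LEAK = 0.1  # leak conductance (assumed)
V_REST = 0.0  # resting potential
V_TH = 1.0    # spike threshold
DT = 0.5      # discrete time step

def count_spikes(i_app, steps=2000):
    """Iterate v <- v + (dt/C) * (-g_leak * (v - v_rest) + i_app)."""
    v, spikes = V_REST, 0
    for _ in range(steps):
        v += DT / C * (-G_LEAK * (v - V_REST) + i_app)
        if v >= V_TH:
            spikes += 1
            v = V_REST  # reset after a spike
    return spikes

# Rheobase for this map: i_app must exceed g_leak * (v_th - v_rest) = 0.1
print(count_spikes(0.05), count_spikes(0.3), count_spikes(0.6))
```

Below the rheobase the voltage settles at i_app / g_leak without ever reaching threshold, so the cell stays silent; above it, firing rate grows with the applied current, both in a handful of arithmetic steps per time step rather than a stiff ODE solve.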


Subjects
Models, Neurological; Neurons; Neurons/physiology; Action Potentials/physiology; Neural Networks, Computer
3.
Behav Brain Res ; 383: 112459, 2020 04 06.
Article in English | MEDLINE | ID: mdl-31972186

ABSTRACT

Humans and animals not only keep track of time intervals but can also make decisions about durations. Temporal bisection is a psychophysical task widely used to assess the latter ability via the categorization of durations as short or long. Many existing models of temporal bisection performance primarily account for choice proportions and tend to overlook the associated response times. We propose a time-cell neural network that implements both interval timing and temporal categorization. The proposed model can keep track of time intervals based on lurching wave activity; it can learn the reference durations along with their association with different categorization responses; and it can carry out the comparison of arbitrary intermediate durations to the reference durations. We compared the model's predictions about choice behavior and response times to empirical data previously gathered from rats and showed that this time-cell neural network can predict the canonical behavioral signatures of temporal bisection performance. Specifically, the proposed model can account for (a) the sigmoidal relationship between the probability of long choices and the test durations, (b) the superposition of choice functions on a relative time scale, (c) the localization of the point of subjective equality at the geometric mean of the reference durations, and (d) the differential modulation of short and long categorization response times as a function of the test durations.
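Signatures (a) and (c) from the abstract can be stated descriptively in a few lines: a psychometric function that rises sigmoidally with test duration and crosses 0.5 at the geometric mean of the reference durations. This is a standard logistic-in-log-time description, not the time-cell network itself, and the reference durations and slope below are assumed for illustration.

```python
# Sketch (assumed parameters): canonical temporal-bisection psychometric
# function -- p("long") rises sigmoidally, PSE at the geometric mean.
import math

T_SHORT, T_LONG = 2.0, 8.0         # reference durations in seconds (assumed)
PSE = math.sqrt(T_SHORT * T_LONG)  # geometric mean = 4.0 s
SLOPE = 4.0                        # temporal sensitivity (assumed)

def p_long(t):
    """Logistic in log-duration: p = 1 / (1 + exp(-slope * ln(t / PSE)))."""
    return 1.0 / (1.0 + math.exp(-SLOPE * math.log(t / PSE)))

print([round(p_long(t), 2) for t in (2.0, 3.0, 4.0, 6.0, 8.0)])
```

Because the function depends on duration only through the ratio t / PSE, choice curves from different reference pairs superimpose when plotted on a relative time scale, which is signature (b).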


Subjects
Decision Making; Hippocampus; Neural Networks, Computer; Neurons; Time Perception; Animals; Choice Behavior; Learning; Rats; Reaction Time
4.
Front Comput Neurosci ; 12: 111, 2018.
Article in English | MEDLINE | ID: mdl-30760994

ABSTRACT

Many organisms can flexibly time intervals with high accuracy on average but with substantial variability between trials. One of the core psychophysical features of interval timing relates to the signatures of this timing variability: for a given individual, the standard deviation of timed responses/time estimates is nearly proportional to their central tendency (the scalar property). Many studies have aimed at elucidating the neural basis of interval timing based on neurocomputational principles in a fashion that would explain the scalar property. Recent experimental evidence shows that there is indeed a specialized neural system for timekeeping. This system, referred to as "time cells," is composed of a group of neurons that fire sequentially as a function of elapsed time. Importantly, the time interval between consecutively firing time-cell ensembles has been shown to increase with more elapsed time. However, when subjective time is calculated by adding the distributions of time intervals between these sequentially firing time-cell ensembles, the standard deviation is compressed by the square-root function. In light of this information, the question becomes: how should the signaling between the sequentially firing time-cell ensembles behave for the resulting variability to increase linearly with time, as required by the scalar property? We developed a simplified model of time cells that offers a mechanism for the synaptic communication of the sequentially firing neurons to address this ubiquitous property of interval timing. The model is composed of a single layer of time cells formulated as integrate-and-fire neurons with feed-forward excitatory connections. The resulting behavior is simple neural wave activity. When this model is simulated with noisy conductances, the standard deviation of the time-cell spike times increases proportionally to the mean of the spike times.
We demonstrate that this statistical property of the model outcomes is robustly observed even when the values of the key model parameters are varied.
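The square-root compression the abstract points out is easy to verify numerically: if the spike time of the N-th time cell were a sum of N independent inter-ensemble intervals, its standard deviation would grow like sqrt(N) while its mean grows like N, violating the scalar property. The Monte Carlo below uses illustrative Gaussian intervals, not values from the paper.

```python
# Sketch of the statistical point: i.i.d. inter-ensemble intervals give
# SD ~ sqrt(N), not the linear growth the scalar property requires.
import math
import random

random.seed(1)

def spike_time_sd(n_cells, mean=1.0, sd=0.2, trials=5000):
    """SD of the n-th cell's spike time when intervals are i.i.d. Gaussians."""
    times = [sum(random.gauss(mean, sd) for _ in range(n_cells))
             for _ in range(trials)]
    m = sum(times) / trials
    return math.sqrt(sum((t - m) ** 2 for t in times) / trials)

sd10, sd40 = spike_time_sd(10), spike_time_sd(40)
# Mean quadruples (10 -> 40 intervals) but SD only ~doubles (sqrt of 4).
print(round(sd40 / sd10, 2))
```

A mechanism satisfying the scalar property therefore needs correlated or position-dependent interval variability across the chain, which is the gap the model in this abstract addresses.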

5.
Neural Netw ; 90: 72-82, 2017 Jun.
Article in English | MEDLINE | ID: mdl-28390225

ABSTRACT

Persistent irregular activity is defined as elevated irregular neural discharges in the brain, such that while the average network activity displays high-frequency oscillations, the participating neurons display irregular, low-frequency oscillations. This type of activity is observed in many brain regions, such as the prefrontal cortex, which plays a role in working memory. Previous studies have shown that large networks with sparse connections, networks with strong noise and persistent inhibition, and networks with structured synaptic connections display persistent irregular activity. However, experimental studies show that not all brain regions satisfy these assumptions. In this study, we show that a small network of excitatory and inhibitory neurons with random synaptic connections can reproduce persistent irregular activity. In particular, the model shows that a less-than-perfect rebound pattern in excitatory cells, coincidence-sensitive inhibitory cells, and sparse synaptic inhibition can account for persistent irregular activity in an excitatory-inhibitory neural network with randomly assigned synaptic connections.
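The connectivity ingredient the abstract relies on, randomly assigned sparse synapses in a mixed excitatory-inhibitory population, can be sketched as follows. The population sizes and connection probability are assumptions for illustration only; the sketch builds a weight matrix obeying Dale's law (each presynaptic neuron is purely excitatory or purely inhibitory), not the paper's full dynamical model.

```python
# Illustrative sketch: random sparse excitatory-inhibitory connectivity
# with Dale's law. Sizes and connection probability are assumed.
import random

random.seed(2)

N_EXC, N_INH = 80, 20
P_CONN = 0.1  # sparse random connection probability (assumed)

def make_weights():
    n = N_EXC + N_INH
    w = [[0.0] * n for _ in range(n)]
    for pre in range(n):
        sign = 1.0 if pre < N_EXC else -1.0  # Dale's law per presynaptic cell
        for post in range(n):
            if pre != post and random.random() < P_CONN:
                w[pre][post] = sign * random.random()
    return w

W = make_weights()
n_syn = sum(1 for row in W for x in row if x != 0.0)
print("synapses:", n_syn)
```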


Subjects
Models, Neurological; Neural Inhibition/physiology; Neural Networks, Computer; Neurons/physiology; Action Potentials/physiology; Animals; Humans; Nerve Net/physiology; Prefrontal Cortex/physiology
6.
Philos Trans R Soc Lond B Biol Sci ; 369(1637): 20120461, 2014 Mar 05.
Article in English | MEDLINE | ID: mdl-24446495

ABSTRACT

Humans and animals time intervals from seconds to minutes with high accuracy but limited precision. Consequently, time-based decisions are inevitably subject to our endogenous timing uncertainty and thus require temporal risk assessment. In this study, we tested the temporal risk assessment ability of humans in a task in which participants had to withhold each subsequent response for a minimum duration to earn reward, and each response reset the trial time. Premature responses were not penalized in Experiment 1 but were penalized in Experiment 2. Participants tried to maximize reward within a fixed session time (over eight sessions) by pressing a key; no instructions were provided regarding the task rules/parameters. We evaluated empirical performance within a framework of optimality based on the level of endogenous timing uncertainty and the payoff structure. Participants nearly tracked the optimal target inter-response times (IRTs), which changed as a function of the level of timing uncertainty, and maximized the reward rate in both experiments. Acquisition of the optimal target IRT was rapid and abrupt, with no further improvement or worsening thereafter. These results constitute an example of optimal temporal risk assessment performance in a task that required finding the optimal trade-off between the 'speed' (timing) and 'accuracy' (reward probability) of timed responses for reward maximization.
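The optimality analysis described here can be sketched numerically under stated assumptions: a responder aims at target IRT t, actual IRTs are Gaussian with scalar noise (sd = w * t, an assumption), a response earns reward only if the IRT exceeds the minimum duration T, and premature responses merely reset the clock (Experiment 1's no-penalty rule). Then reward rate is roughly P(IRT >= T) / t, and the optimal target rises with timing uncertainty w. The value of T and the noise model are hypothetical, not the paper's fitted parameters.

```python
# Minimal sketch (assumed noise model): optimal target IRT shifts with
# timing uncertainty w in a withhold-for-T, no-penalty reward schedule.
import math

T = 2.0  # minimum withholding duration in seconds (hypothetical)

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def reward_rate(t, w):
    """Rewards per unit time: P(IRT >= T) / t with sd = w * t."""
    return phi((t - T) / (w * t)) / t

def optimal_target(w):
    """Grid search for the reward-rate-maximizing target IRT."""
    grid = [T + 0.001 * k for k in range(1, 4000)]
    return max(grid, key=lambda t: reward_rate(t, w))

print(round(optimal_target(0.1), 2), round(optimal_target(0.3), 2))
```

Aiming exactly at T would miss the reward on about half the trials, so the optimum sits above T by a safety margin that grows with w, which is the speed-accuracy trade-off the abstract describes.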


Subjects
Decision Making/physiology; Risk; Time Perception/physiology; Female; Games, Experimental; Humans; Male; Models, Statistical; Photic Stimulation; Regression Analysis; Reward; Time Factors; Young Adult